Demonstration of Probabilistic Robotics

 

by Jef Mangelschots

During the March 2010 meeting of the RSSC (Robotics Society of Southern California), Rainer Hessmer demonstrated a simulation of the concepts of "Probabilistic Robotics" by Sebastian Thrun from Stanford University, who led the Stanford team that won the DARPA Grand Challenge.

This is a report on his presentation:

To better understand the powerful concepts of Probabilistic Robotics, he developed a simulator to explore the theory. This link brings you to his website, where you can find his program.

Probabilistic Robotics deals with the fact that robots are far from perfect. There can be a big discrepancy between what you command your robot to do and what it actually does, and between what is actually out there and how the robot perceives it. Wheels can slip. Left and right motors turn at different speeds under the same applied voltage, which throws off dead-reckoning. Even when a robot is commanded to drive straight at 2 m/s, the actual displacement is influenced by many factors. Over many repetitions, the actual displacements form a Gaussian distribution centered on the commanded motion.
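As a minimal sketch of this idea (the noise magnitudes below are made-up illustration values, not parameters from Rainer's simulator), each commanded motion can be modeled as the command perturbed by zero-mean Gaussian noise:

```python
import random

def sample_motion(distance_cmd, dist_noise=0.05, heading_noise=0.02):
    """Simulate imperfect actuation: the commanded straight-line motion
    is perturbed by Gaussian noise (wheel slip, motor mismatch, ...)."""
    actual_distance = random.gauss(distance_cmd, dist_noise * distance_cmd)
    actual_heading = random.gauss(0.0, heading_noise)  # drift off "straight"
    return actual_distance, actual_heading

# Commanding "drive 2 m straight" many times yields a bell-shaped spread
# of outcomes centered on 2 m -- the Gaussian curve described above.
outcomes = [sample_motion(2.0)[0] for _ in range(10000)]
```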

Sensors are not perfect either and respond differently under different circumstances. Sensors also have a minimum and maximum range, i.e. they cannot see the whole environment. As with motors, their output is Gaussian: taking a distance reading at different times under exactly the same conditions can still yield different results. This is illustrated in the following diagram.
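The same trick works for sensing. The sketch below (again with hypothetical numbers) scores how plausible a range reading is, given the distance the robot would expect to measure from a particular pose, and treats readings at the sensor's maximum range as saturation rather than error:

```python
import math

MAX_RANGE = 4.0   # hypothetical sensor limit, in meters
SIGMA = 0.2       # hypothetical measurement noise (std dev, meters)

def measurement_likelihood(measured, expected):
    """Gaussian likelihood of a range reading around the expected
    distance, with a small constant term for saturated readings."""
    if measured >= MAX_RANGE:
        return 0.05  # sensor maxed out: weakly plausible from any pose
    return math.exp(-0.5 * ((measured - expected) / SIGMA) ** 2) \
           / (SIGMA * math.sqrt(2.0 * math.pi))

# Two readings taken under identical conditions can differ,
# yet both remain plausible under the Gaussian model:
print(measurement_likelihood(1.95, 2.0), measurement_likelihood(2.10, 2.0))
```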

A robot is placed at an unknown location in a room whose layout is stored in its database. The robot has no idea where in the room it is. Another limiting factor is that the robot is equipped with a range sensor whose reach is much smaller than the room's width. To top it off, this sensor is not ideal. The robot traverses the room, slowly mapping the walls, and uses the probabilistic and Monte Carlo Localization techniques (paper) described in the book to whittle down the set of possible locations until it is fairly certain of its position.
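The core of Monte Carlo Localization is compact even though the theory behind it is not: represent the robot's belief as a cloud of candidate poses ("particles"), and run a predict / weight / resample cycle every time the robot moves and senses. A minimal sketch, assuming motion and sensor models along the lines of the ones above (the `likelihood` function is expected to compare the real reading against what the map predicts from a given pose):

```python
import random

def mcl_step(particles, control, reading, motion_model, likelihood):
    """One predict-weight-resample cycle of Monte Carlo Localization.
    `particles` is a list of (x, y, theta) pose hypotheses."""
    # 1. Predict: move every particle with the *noisy* motion model,
    #    so the cloud spreads out to reflect actuation uncertainty.
    moved = [motion_model(p, control) for p in particles]
    # 2. Weight: score each particle by how well the actual sensor
    #    reading matches what the sensor would see from that pose.
    weights = [likelihood(reading, p) for p in moved]
    total = sum(weights)
    if total == 0.0:
        return moved  # no particle explains the reading; keep the cloud
    weights = [w / total for w in weights]
    # 3. Resample: draw a new cloud, favoring high-weight particles.
    #    Implausible hypotheses die out; clusters form around the
    #    poses that best explain everything the robot has seen so far.
    return random.choices(moved, weights=weights, k=len(moved))
```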

In his simulation, the robot initially has no idea where it is. Every little circle denotes a possible position, and the little line shows the perceived direction. The robot now proceeds slowly, observing the surrounding room from different angles and processing the newly acquired data, with the purpose of refining its estimate of its location. In this example, the sensor parameters are set to very broad tolerances, to show that the techniques also work with very poor sensors.

As the robot progresses, a few distinct candidate locations emerge as the points congregate into clusters, illustrating that the robot's confidence about its position is growing:

Clusters are growing:

 

In the following example, Rainer tightened the sensor parameters to more realistic tolerances. This illustrates that the robot becomes fairly certain of its position very early on; the highest-probability cluster remains focused throughout the duration of the travel:
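One crude way to put a number on that certainty (my own illustration, not something shown in the demo) is to measure how tightly the particle cloud has collapsed: the mean of the particles gives a position estimate, and the spread around that mean shrinks as the filter converges. This only makes sense once the cloud has settled into a single cluster:

```python
import math

def pose_estimate(particles):
    """Mean (x, y) of the particle cloud plus its RMS spread.
    A shrinking spread means the filter is growing more confident."""
    n = len(particles)
    mx = sum(p[0] for p in particles) / n
    my = sum(p[1] for p in particles) / n
    spread = math.sqrt(sum((p[0] - mx) ** 2 + (p[1] - my) ** 2
                           for p in particles) / n)
    return (mx, my), spread
```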

 

In these past examples, the robot tried to determine its position in a known environment. This is very useful for domestic robots that roam around your house or office.

All of this is preparation for the Holy Grail of robot navigation: SLAM, or Simultaneous Localization And Mapping, which means determining your position in an environment the robot is mapping at the same time. This becomes important when roaming unknown environments, as in search and rescue, exploration, bomb disposal, geologic mapping, and various military applications.

Here is a video found on YouTube that illustrates the same concepts ...